443 research outputs found

    Recommender systems challenge 2014


    Probabilistic Perspectives on Collecting Human Uncertainty in Predictive Data Mining

    In many areas of data mining, data is collected from human beings. In this contribution, we ask how people actually respond to ordinal scales. The main problem observed is that users tend to be volatile in their choices, i.e. complex cognitions do not always lead to the same decisions but to distributions of possible decision outputs. This human uncertainty can have a considerable impact on common data mining approaches, and the question of how to model it effectively therefore emerges naturally. Our contribution introduces two different approaches for modelling the human uncertainty of user responses. In doing so, we develop techniques to measure this uncertainty at the level of user inputs as well as at the level of user cognition. With the support of comprehensive user experiments and large-scale simulations, we systematically compare both methodologies along with their implications for personalisation approaches. Our findings demonstrate that a significant proportion of users submit something quite different (action) from what they really have in mind (cognition). Moreover, we demonstrate that statistically sound evidence for algorithm assessment becomes hard to obtain, especially when explicit rankings are to be built.
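    As a minimal illustration of the distributional view described in this abstract (my own sketch, not the paper's actual model), a user's response on a 5-point ordinal scale can be treated as a draw from a categorical distribution rather than a fixed value; the probabilities below are purely hypothetical.

```python
import numpy as np

# Illustrative sketch: one (user, item) pair where the user "has in mind" a
# rating around 4 (cognition) but occasionally submits 3 or 5 (action).
rng = np.random.default_rng(42)

response_probs = np.array([0.00, 0.05, 0.15, 0.60, 0.20])  # P(rating = 1..5), assumed
ratings = np.arange(1, 6)

expected_rating = float(ratings @ response_probs)          # cognition-level estimate
observed = rng.choice(ratings, size=10, p=response_probs)  # simulated submitted ratings

print(f"expected rating: {expected_rating:.2f}")
print(f"observed responses: {observed.tolist()}")
print(f"sample mean / std: {observed.mean():.2f} / {observed.std(ddof=1):.2f}")
```

    The spread of the simulated responses around their mean is the kind of volatility the abstract refers to: the same underlying cognition yields a distribution of submitted ratings rather than a single value.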

    Replicable Evaluation of Recommender Systems

    This is the author's version of the work. It is posted here for your personal use. Not for redistribution. The definitive Version of Record was published in RecSys '15 Proceedings of the 9th ACM Conference on Recommender Systems, http://dx.doi.org/10.1145/2792838.2792841. Recommender systems research is by and large based on comparisons of recommendation algorithms' predictive accuracy: the better the evaluation metrics (higher accuracy scores or lower predictive errors), the better the recommendation algorithm. Comparing the evaluation results of two recommendation approaches is, however, a difficult process, as there are many factors to be considered in the implementation of an algorithm, its evaluation, and how datasets are processed and prepared. This tutorial shows how to present evaluation results in a clear and concise manner, while ensuring that the results are comparable, replicable and unbiased. These insights are not limited to recommender systems research alone, but are also valid for experiments with other types of personalized interactions and contextual information access. Supported in part by the Ministerio de Educación y Ciencia (TIN2013-47090-C3-2).
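    A small sketch of the kind of fully specified protocol the tutorial argues for (my own illustration, not taken from the tutorial): the seed, split fraction, and metric are fixed in code and reported next to the results, so another group can regenerate the exact same split. The data and constant names are hypothetical.

```python
import numpy as np

# Hypothetical ratings: (user_id, item_id, rating) triples.
ratings = np.array([
    (0, 10, 4.0), (0, 11, 3.0), (0, 12, 5.0),
    (1, 10, 2.0), (1, 13, 4.0), (1, 14, 1.0),
])

SEED, TEST_FRACTION = 42, 0.2  # reported alongside the results

rng = np.random.default_rng(SEED)
mask = rng.random(len(ratings)) < TEST_FRACTION
train, test = ratings[~mask], ratings[mask]

print(f"protocol: random split, seed={SEED}, test_fraction={TEST_FRACTION}, "
      f"metric=RMSE, train={len(train)}, test={len(test)}")
```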

    Comparative recommender system evaluation: Benchmarking recommendation frameworks

    This is the author's version of the work. It is posted here for your personal use. Not for redistribution. The definitive Version of Record was published in RecSys '14 Proceedings of the 8th ACM Conference on Recommender systems, http://dx.doi.org/10.1145/2645710.2645746. Recommender systems research is often based on comparisons of predictive accuracy: the better the evaluation scores, the better the recommender. However, it is difficult to compare results from different recommender systems due to the many options in design and implementation of an evaluation strategy. Additionally, algorithmic implementations can diverge from the standard formulation due to manual tuning and modifications that work better in some situations. In this work we compare common recommendation algorithms as implemented in three popular recommendation frameworks. To provide a fair comparison, we have complete control of the evaluation dimensions being benchmarked: dataset, data splitting, evaluation strategies, and metrics. We also include results using the internal evaluation mechanisms of these frameworks. Our analysis points to large differences in recommendation accuracy across frameworks and strategies, i.e. the same baselines may perform orders of magnitude better or worse across frameworks. Our results show the necessity of clear guidelines when reporting evaluation of recommender systems to ensure reproducibility and comparison of results. This work was partly carried out during the tenure of an ERCIM “Alain Bensoussan” Fellowship Programme. The research leading to these results has received funding from the European Union Seventh Framework Programme (FP7/2007-2013) under grant agreements no. 246016 and no. 610594, and the Spanish Ministry of Science and Innovation (TIN2013-47090-C3-2).
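    One way to picture the controlled comparison this abstract describes (a simplified sketch under my own assumptions, not the paper's actual harness): every framework is wrapped behind the same prediction interface, while the dataset, split, and metric implementation stay identical, so only the framework varies.

```python
import numpy as np

def rmse(predictions, targets):
    """Single shared metric implementation, applied to every framework's output."""
    return float(np.sqrt(np.mean((np.asarray(predictions) - np.asarray(targets)) ** 2)))

def evaluate(framework_name, predict_fn, test_set):
    """test_set: list of (user, item, true_rating). predict_fn is the only part
    that differs between frameworks; data, split, and metric stay identical."""
    preds = [predict_fn(u, i) for u, i, _ in test_set]
    truth = [r for _, _, r in test_set]
    print(f"{framework_name}: RMSE = {rmse(preds, truth):.4f}")

# Hypothetical usage: wrap each framework behind the same predict(user, item)
# signature and evaluate all of them on the same held-out split.
test_split = [(0, 10, 4.0), (1, 10, 2.0), (1, 13, 4.0)]
evaluate("constant-baseline", lambda u, i: 3.0, test_split)
```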

    Improving accountability in recommender systems research through reproducibility

    Reproducibility is a key requirement for scientific progress. It allows the work of others to be reproduced and, as a consequence, the reported claims and results to be fully trusted. In this work, we argue that by facilitating the reproducibility of recommender systems experimentation, we indirectly address the issues of accountability and transparency in recommender systems research from the perspectives of practitioners, designers, and engineers aiming to assess the capabilities of published research works. These issues have become increasingly prevalent in recent literature. Reasons for this include societal movements around intelligent systems and artificial intelligence striving toward the fair and objective use of human behavioral data (as in Machine Learning, Information Retrieval, or Human–Computer Interaction). Society has grown to expect explanations and transparency standards regarding the underlying algorithms making automated decisions for and around us. This work surveys existing definitions of these concepts and proposes a coherent terminology for recommender systems research, with the goal of connecting reproducibility to accountability. We achieve this by introducing several guidelines and steps that lead to reproducible and, hence, accountable experimental workflows and research. We additionally analyze several instantiations of recommender system implementations available in the literature and discuss the extent to which they fit the introduced framework. With this work, we aim to shed light on this important problem and facilitate progress in the field by increasing the accountability of research. This work has been funded by the Ministerio de Ciencia, Innovación y Universidades (reference: PID2019-108965GB-I00).

    Coherence and Inconsistencies in Rating Behavior - Estimating the Magic Barrier of Recommender Systems

    Recommender systems have to deal with a wide variety of users and user types that express their preferences in different ways. This difference in user behavior can have a profound impact on the performance of the recommender system. Users receive better (or worse) recommendations depending on the quantity and the quality of the information the system knows about them. Specifically, the inconsistencies in users' preferences impose a lower bound on the error the system may achieve when predicting ratings for one particular user; this is referred to as the magic barrier. In this work, we present a mathematical characterization of the magic barrier based on the assumption that user ratings are afflicted with inconsistencies, i.e. noise. Furthermore, we propose a measure of the consistency of user ratings (rating coherence) that predicts the performance of recommendation methods. More specifically, we show that user coherence is correlated with the magic barrier; we exploit this correlation to discriminate between easy users (those with a lower magic barrier) and difficult ones (those with a higher magic barrier). We report experiments where the recommendation error for the more coherent users is lower than that of the less coherent ones. We further validate these results by using two public datasets, where the necessary data to identify the magic barrier is not available, in which we obtain similar performance improvements. This research was in part supported by the Spanish Ministry of Economy, Industry and Competitiveness (TIN2016-80630-P).
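    A simplified sketch of one way such a noise-based lower bound can be estimated from re-rating data (my own illustration; the paper's exact characterization may differ): treat the deviation of each repeated rating from its per-(user, item) mean as noise, and take the root mean square of that noise as the RMSE floor. The data and function name below are hypothetical.

```python
import numpy as np

def magic_barrier(re_ratings):
    """Estimate an RMSE lower bound from repeated ratings of the same items.

    re_ratings: dict mapping (user, item) -> list of ratings the user gave that
    item on different occasions. Each rating's deviation from its per-pair mean
    is treated as noise; the barrier is sqrt(E[noise^2]).
    """
    squared_noise = []
    for values in re_ratings.values():
        values = np.asarray(values, dtype=float)
        squared_noise.extend((values - values.mean()) ** 2)
    return float(np.sqrt(np.mean(squared_noise)))

# Hypothetical re-rating data for two (user, item) pairs.
example = {
    ("u1", "i1"): [4, 5, 4],   # fairly coherent user
    ("u2", "i1"): [2, 5, 3],   # volatile user -> larger noise, higher barrier
}
print(f"estimated magic barrier (RMSE): {magic_barrier(example):.3f}")
```

    Under this view, no rating predictor evaluated against such noisy ground truth can be expected to achieve an RMSE below the estimated barrier, which is why more coherent users admit lower prediction error.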

    Benchmarking: A methodology for ensuring the relative quality of recommendation systems in software engineering

    This chapter describes the concepts involved in the process of benchmarking recommendation systems. Benchmarking of recommendation systems is used to ensure the quality of a research system or production system in comparison to other systems, whether algorithmically, infrastructurally, or according to any other sought-after quality. Specifically, the chapter presents the evaluation of recommendation systems according to recommendation accuracy, technical constraints, and business values, in the context of a multi-dimensional benchmarking and evaluation model encompassing any number of qualities into a final comparable metric. The focus is on quality measures related to recommendation accuracy, technical factors, and business values. The chapter first introduces concepts related to the evaluation and benchmarking of recommendation systems, continues with an overview of the current state of the art, and then presents the multi-dimensional approach in detail. The chapter concludes with a brief discussion of the introduced concepts and a summary.
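    One simple way a multi-dimensional model could collapse several qualities into a single comparable metric is a weighted sum of normalized per-dimension measures; this is an illustrative sketch, not the chapter's actual model, and the dimension names and weights are purely hypothetical.

```python
def benchmark_score(metrics, weights):
    """Combine normalized per-dimension quality measures into one comparable score.

    metrics: dict of dimension -> value already normalized to [0, 1], where
    higher is better (error-type metrics should be inverted beforehand).
    weights: dict of dimension -> relative importance, summing to 1.
    """
    return sum(weights[name] * metrics[name] for name in weights)

# Hypothetical example spanning accuracy, technical, and business dimensions.
system_a = {"accuracy": 0.82, "response_time": 0.90, "catalog_coverage": 0.40}
weights  = {"accuracy": 0.5,  "response_time": 0.2,  "catalog_coverage": 0.3}
print(f"combined benchmark score: {benchmark_score(system_a, weights):.3f}")
```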

    β-Lactoglobulin-linoleate complexes: In vitro digestion and the role of protein in fatty acids uptake

    The dairy protein β-lactoglobulin (BLG) is known to bind fatty acids such as the salt of the essential long-chain fatty acid linoleic acid (cis,cis-9,12-octadecadienoic acid, n-6, 18:2). The aim of the current study was to investigate how bovine BLG-linoleate complexes, of various stoichiometry, affect the enzymatic digestion of BLG and the intracellular transport of linoleate into enterocyte-like monolayers. Duodenal and gastric digestions of the complexes indicated that BLG was hydrolyzed more rapidly when complexed with linoleate. Digested as well as undigested BLG-linoleate complexes reduced intracellular linoleate transport as compared with free linoleate. To investigate whether enteroendocrine cells perceive linoleate differently when it is part of a complex, the ability of linoleate to increase production or secretion of the enteroendocrine satiety hormone cholecystokinin was measured. Cholecystokinin mRNA levels differed when linoleate was presented to the cells alone or as part of a protein complex. In conclusion, understanding interactions between linoleate and BLG could help to formulate foods with targeted fatty acid bioaccessibility and, therefore, aid in the development of food matrices with optimal bioactive efficacy. S. Le Maux is currently supported by a Teagasc Walsh Fellowship and the Department of Agriculture, Fisheries and Food (FIRM project 08/RD/TMFRC/650). We also acknowledge funding from an IRCSET-Ulysses Travel Grant.

    User-Item Reciprocity in Recommender Systems: Incentivizing the Crowd

    Data consumption has changed significantly in the last 10 years. The digital revolution and the Internet have brought an abundance of information to users. Recommender systems are a popular means of finding content that is both relevant and personalized. However, today's users require better recommender systems, capable of producing continuous data feeds that keep up with their instantaneous and mobile needs. The CrowdRec project addresses this demand by providing context-aware, resource-combining, socially-informed, interactive and scalable recommendations. The key insight of CrowdRec is that, in order to achieve the dense, high-quality, timely information required for such systems, it is necessary to move from passive user data collection to more active techniques fostering user engagement. For this purpose, CrowdRec activates the crowd, soliciting input and feedback from the wider community.